Shaping Disability- and Spectrum-Centered Algorithms and Ethics
Explore the MOOC; our policy suggestions and publications addressing AI and digital acts, intergovernmental treaties, and ontologies of technologies; the trilogy of report contributions; and the memo for social awareness.
MOOC: Disability-Centered AI and Ethics
AI algorithms can be used to augment smart wheelchairs, walking sticks, geolocation and city tools, and bionic and rehabilitation technologies; to turn sign language into text or visual objects into transcriptions; and to support people with cognitive impairments through expressive computing and social robotics. These algorithms also fuel a range of systems used by the general population (the “assistive pretext”).
Despite the significant possibilities of AI, disability is not a monolith but a social spectrum that requires special attention and may pose challenges for algorithms in proper recognition, analysis, prediction and outcomes.
In particular, AI systems can make errors or lack accessibility for individuals with sensory, cognitive or physical impairments in workplace, educational, public or law-enforcement scenarios, including:
Individuals with facial differences, asymmetry, or craniofacial syndromes;
Different gestures, gesticulation, or posture;
Speech impairments or specific communication styles;
Use of assistive devices or technologies;
Negative connotations in semantics associated with disability keywords and phrases (e.g., in language-based systems);
Lower accuracy for “invisible” disabilities, women (“gender-blindness”) or particular populations.
The MOOC explores:
how different AI systems and ontologies, including expressive computing, may interact with different spectrums;
historical and statistical contexts of errors and distortions;
approaches to data, model and system oversight;
policies and frameworks for system categorization, scenarios, risks and oversight.
Exploring the ethics of social and assistive algorithms at the Technical University of Munich, the London AI Act Summit, the Commission’s Scientific Mechanism, and elsewhere.
Explore our publications, public letters, signatures and policy suggestions, including the recent AI and digital acts, safety treaties and declarations.
Publications and public letters
Frameworks and resources
Explore the trilogy of report contributions through the disability lens, addressing AI in health, education and labor. Find additional sources and frameworks.
Memo and society
Explore the memo, which summarizes the key facts, statistics and actions. You can use it to raise awareness or add it to your sources and toolkits.
Special gratitude
We would like to thank the intergovernmental organizations, institutions, media and think tanks with whom we cooperated on publications, sessions, repositories, research and public reports to connect more dots in this complex work on technology and ethics.
Separate gratitude to: Fengchun Miao and Xianglei Zheng (UNESCO), John Tarver (OECD), Maud Stiernet (W3C), Chloe Touzet (formerly OECD), Kave Nouri and Marine Uldry (EDF), Auxane Boch (Technical University of Munich), Shada Alsalamah and Rohit Malpani (WHO), Acintya Jayawardena (Mozilla), Nizan Geslevich Packin, Abigail Dubiniecki, Yip Thy Diep Ta and many others.
If you support our continuous work to reflect all spectrums, cases and scenarios, and ontologies of technologies and systems, in evolving national and multilateral policies (e.g., the AI Act and digital acts, safety treaties and declarations, intergovernmental frameworks, recommendations of councils of science and technology), kindly use the form below to become a signatory. We may use it to further support our work.
If you represent a ministry, authority, intergovernmental organization, institution, think tank or research/science ecosystem, do not hesitate to refer to the resources above in your work.